Search Results: "killer"

28 February 2014

Russell Coker: Links February 2014

The Economist has an interesting and informative article about the lack of reproducibility of scientific papers and the implications for scientific research [1]. Regina Dugan gave an interesting TED talk about some of the amazing DARPA projects [2]. Chris Anderson interviewed Elon Musk about the Tesla cars, SpaceX, and his new venture Solar City [3]. Elon has a lot of great ideas for improving humanity while also making money. Smart Planet has an interesting article about Bhutan's switch to electric vehicles [4]. Paul Piff gave an insightful and well researched TED talk about the ways that money makes people mean [5]. Maryn McKenna wrote an interesting article for Wired about what happens when the current antibiotics stop working [6]. Unfortunately she lists increasing food prices as a consequence; really the unreasonably low price of meat is due to the same misuse of antibiotics that is causing this problem. Linda Walther Tirado wrote an interesting article about being poor titled Why I Make Terrible Decisions, or, Poverty Thoughts [7]. It gives a real insight into the situation of people who are trapped in poverty. When someone who is as obviously intelligent as Linda feels that it's impossible to escape poverty there is a real problem in the system. While Australia doesn't suck nearly as badly as the US in this regard (higher minimum wage and better health care) we still need to improve things; I know people in Australia whose experience bears some similarity to Linda's. Maxwell Neely-Cohen wrote an interesting article about peer pressure [8]. Some of the conclusions are dubious, but the ideas on the way the Internet changes peer relationships in high school are interesting. An English pediatrician wrote an article for The Daily Beast about why he won't accept anti-vax clients [9]. There are some decent people in the Liberal Party: Liberal MP Warren Entsch attacks Cory Bernardi on gay obsession [10]. AFAIK we haven't yet had a gay sex scandal involving a homophobic Australian politician.

10 February 2014

Mario Lang: Neurofunkcasts

I have always loved Drum and Bass. In 2013 I rediscovered my love for Darkstep and Neurofunk, and found that these genres have developed quite a lot in recent years. Some labels like Black Sun Empire and Evol Intent produce mixes/sets on a regular basis as podcasts these days. This article aggregates some neurofunk podcasts I like a lot, most recent first. Enjoy 33 hours and 57 minutes of fun with dark and energizing beats. Thanks to BSE Contrax and Evol Intent for providing such high quality sets. You can also see the Python source for the program that was used to generate this page.

4 January 2014

Simon Josefsson: Necrotizing Fasciitis

Dear World, On the morning of December 24th I felt an unusual pain in my left hand between the thumb and forefinger. The pain increased and in the afternoon I got a high fever, at some point above 40 degrees Celsius or 104 degrees Fahrenheit. I went to the emergency department and was hospitalized during the night between the 24th and 25th of December. On the afternoon of December 26th I underwent surgery to find out what was happening, and was then diagnosed with Necrotizing Fasciitis (the Wikipedia article on NF gives a fair summary), caused by the common streptococcus bacteria (again, see the Wikipedia article on Streptococcus). A popular name for the disease is flesh-eating bacteria. Necrotizing Fasciitis is a rare and aggressive infection, often deadly if left untreated, that can move through the body at speeds of a couple of centimeters per hour. I have gone through 6 surgeries, leaving wounds all over my left hand and arm. I have felt afraid of what the disease will do to me, anxiety over what will happen in the future, and confusion and uncertainty about how a disease like this can exist and whether I am getting the right treatment, since so little appears to be known about it. The feeling of loneliness, and that nobody is helping or even can help, has also been present. I have experienced pain. Even though pain is something I'm less afraid of (I have a back problem) compared to other feelings, I needed help from several pain killers. I've received normal Paracetamol, stronger NSAIDs (e.g., Ketorolac/Toradol), and several opiate pain-killers including Alfentanil/Rapifen, Tramadol/Tradolan, OxyContin/OxyNorm, and Morphine. After the first and second surgery, nothing helped and I was still screaming with pain and kicking the bed. After the first surgery, I received a local anesthetic (a plexus block). After the second surgery, the doctors did not want to mask my pain, because signs of pain indicate further growth of the infection, and I was given the pain-dissociative drug Ketamine/Ketalar and the stress-releasing Clonidine/Catapresan. Once the third surgery removed all of the infection, the pain went down, and I experienced many positive feelings. I am very grateful to be alive. I felt a strong sense of inner power when I started to fight back against the disease. I find joy in even the simplest of things, like being able to drink water or seeing trees outside the window. I cried out of happiness when I saw our children's room full of toys. I have learned many things about the human body, and as I am curious by nature I look forward to learning more. I hope to be able to draw strength from this incident, to help me prioritize better in my life. My loving wife Åsa has gone through a nightmare as a consequence of my diagnosis. By day she had to cope with daily life, taking care of our wonderful 1-year-old daughter Ingrid and 3-year-old boy Alfred. All three of them had various degrees of strep throat with fever, caused by the same bacteria, and anyone with young kids knows how intense that alone can be. She gave me strength over the phone. She kept friends and relatives up to date about what happened, with the phone ringing all the time. She worked to get information out of the hospital about my status, sometimes being rudely treated and simply hung up on. After a call with the doctor after the third surgery, when the infection had spread from the hand to within 5cm of my torso, she started to plan for a life without me. My last operation was on Thursday, January 2nd, and I left hospital the same day.
I'm writing this on Saturday, January 4th, although some details and external links have been added after that. I have regained access to my arm and hand and am doing rehab to regain muscle control, while my body is healing. I'm doing relaxation exercises to control pain and relax muscles, and took the last strong drug yesterday. Currently I take antibiotics (more precisely Clindamycin/Dalacin) and the common Paracetamol-based pain-killer Alvedon, together with on-demand use of an also common NSAID containing Ibuprofen (Ipren). My wife and I were even out at a restaurant tonight. Fortunately I was healthy when this started, and with bi-weekly training sessions for the last 2 years I was physically at my strongest peak in my 38-year-old life (weighing 78kg or 170lb, height 182cm or 6 feet). I started working out to improve back issues, increase strength, and prepare for getting older. Exercise has never been my thing, although I think it is fun to run medium distances (up to 10km). I want to thank everyone who helped me and our family through this, both professionally and personally, but I don't know where to start. You know who you are. You are the reason I'm alive. Naturally, I want to focus on getting well and spend time with my family now. I don't yet know to what extent I will recover, but the prognosis is good. Don't expect anything from me in the communities and organizations that I'm active in (e.g., GNU, Debian, IETF, Yubico). I will come back as energy, time and priorities permit.

21 November 2013

Gunnar Wolf: Back aches...

This year, I had slowly taken up running again. It's an activity I enjoy, even though I'm far from the condition I had when I did it every day. My maximum was running about 8Km four days a week (occasionally up to 15Km, say, on weekends)... But I slowly drifted out of it over the last three years. Yes, I have taken it up now and then, but dropped out again after a few weeks. Well, this year it felt like I was getting back into the routine. Not without some gaps, but I had run most weeks since July at least once, usually twice. I started slowly, doing about 3Km close to home, but it didn't take much to go back to my usual routes inside UNAM, doing a modest average of 4.5-5Km per run, and averaging somewhat over 8Km/h (6.5 min/Km). But... Well, I cannot relate precisely what happened. Exactly a month ago, after a very average run (even a short one, 3.6Km), I had a lower back ache. I didn't pay too much attention to it, but it was strange from the first day, as I often have upper back aches, never lower. The pain came and went slowly several times. I kept cycling to work, although not every day. On October 27th, I even took the time to do the Ciclotón, a nice 38Km (including the ~5Km distance from home) around central Mexico City. I enjoyed the ride, even though every stop and re-start of the bike was a bit painful; what hurts is getting off and on the bike: the posture change. But the pain has been almost constantly there. When I stand up, or when I sit down, it takes about five minutes until the body gets used to the new position and stops hurting. Another strange data point: This last weekend (together with a national holiday) we went to a small conference in Guadalajara, and then to visit our friends in Guanajuato. So, we spent 12 hours on the bus there (12! Yes, there was an accident on the road and we could not pass... So many hours were wasted there, and going back to a junction to take an alternative, longer road), then one night in a hotel bed, and two nights on our friends' guest mattresses, which are not precisely luxury-grade. And Sunday night... I had no pain at all! We came back home, and after only one night back in our bed... I just could not move. I had the strongest pain so far. I could not even walk without some help. We went to the orthopedist a friend recommended, and I was seriously bending my posture: while that part of my posture is usually stable, my right hip was about 2cm higher than the left one, and my shoulders were almost 7cm displaced from my hips! So... Well, I'm on a cocktail of painkillers and anti-inflammatories. The doctor says next week he wants me to have a tomography taken to better understand the causes of this. And, of course, tomorrow I'm leaving for the Festival de Software Libre in Puerto Vallarta. I'm going there by plane, so no big hassle. But the way back will be 12 more hours on a bus. I'm... not precisely looking forward to this bus ride :( Anyway... I sorely miss my bike+running :(

Petter Reinholdtsen: All drones should be radio marked with what they do and who they belong to

Drones, flying robots, are getting more and more popular. The best known ones are the killer drones used by some governments to murder people they do not like without giving them the chance of a fair trial, but the technology has many good uses too, from mapping and forest maintenance to photography and search and rescue. I am sure it is just a question of time before "bad drones" are in the hands of private enterprises, and not only state criminals but petty criminals too. The drone technology is very useful and very dangerous. To have some control over the use of drones, I agree with Daniel Suarez in his TED talk "The kill decision shouldn't belong to a robot", where he suggested this little gem to keep the good while limiting the bad use of drones:
Each robot and drone should have a cryptographically signed I.D. burned in at the factory that can be used to track its movement through public spaces. We have license plates on cars, tail numbers on aircraft. This is no different. And every citizen should be able to download an app that shows the population of drones and autonomous vehicles moving through public spaces around them, both right now and historically. And civic leaders should deploy sensors and civic drones to detect rogue drones, and instead of sending killer drones of their own up to shoot them down, they should notify humans to their presence. And in certain very high-security areas, perhaps civic drones would snare them and drag them off to a bomb disposal facility. But notice, this is more an immune system than a weapons system. It would allow us to avail ourselves of the use of autonomous vehicles and drones while still preserving our open, civil society.
The key is that every citizen should be able to read the radio beacons sent from the drones in the area, to be able to check both the government's and others' use of drones. For such control to be effective, everyone must be able to do it. What should such a beacon contain? At least the formal owner, purpose, contact information and GPS location. Probably also the origin and target position of the current flight. And perhaps some registration number, to be able to look up the drone in a central database tracking their movement. Robots should not have privacy. It is people who need privacy.

1 September 2013

Eddy Petrișor: Integrating Beyond Compare with Semanticmerge

Note: This post will probably not be to the liking of those who think free software is always preferable to closed source software, so if you are such a person, please take this article as an invitation to implement better open source alternatives that can realistically compete with the closed source applications I am mentioning here. I am not going to point out where the open source alternatives are not up to the same level as the commercial tools; I'll leave that for the readers or for another article.



Semanticmerge is a merge tool that attempts to do the right thing when it comes to merging source code. It is language aware and currently supports Java and C#. Just today the creators of the software started working on support for C.

Recently they added Debian packages, so I installed it on my system. For open source development Codice Software, the creator of Semanticmerge, offers free licenses, so I decided to ask for one today, and, although it is Sunday, I received an answer and I will get my license on Monday.

When a method is moved from one place to another and changed in a conflicting way in two parallel development lines, Semanticmerge can isolate the offending method and can pass all its incarnations (base, source and destination or, if you prefer, base, mine and theirs) to a text based merge tool to allow the developer to decide how to resolve the merge. On Linux, the Semanticmerge samples are using kdiff3 as the text-based merge tool, which is nice, but I don't use kdiff3, I use Meld, another open source visual tool for merges and comparisons.


OTOH, Beyond Compare is a merge and compare tool made by Scooter Software which provides a very good text based 3-way merge with a 3 sources + 1 result pane, and can compare both files and directories. Two of its killer features are the ability to split differences into important and unimportant ones according to the syntax of the compared/merged files, and the ability to easily change or add to the syntax rules in a very user-friendly way. This makes it easy to ignore changes in comments, but also basic refactoring such as variable renaming or other trivial code-wide changes, which allows the developer to focus on the important changes/differences during merges or code reviews.

Syntax support for common file formats like C, Java, shell, Perl etc. is built in (but can be modified, which is a good thing), and new file types with their syntaxes can be added via the GUI, from scratch or based on existing rules.

I evaluated Beyond Compare at my workplace and we decided it would be a good investment to purchase licenses for the people in our department.


Having these two tools separate is good, but having them integrated with each other would be even better. So I decided to try to see how it could be done. I installed Beyond Compare on my system, too, and looked through the examples.


The first thing I discovered is that the main assumption of the Semanticmerge developers was that the application would be called via the SCM when merges are to be done, so passing lots of parameters would not be a problem. I realised that when I saw how one of the samples' start scripts invoked Semanticmerge:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="kdiff3 \"#sourcefile\" \"#destinationfile\"" -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\"" -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""
Can you see the problem? It seems Semanticmerge has no persistent knowledge of the user's preferences with regard to the text-based merge tool and exports the issue to the SCM, at the price of overcomplicating the command line. I already mentioned this issue in my license request mail and added the issue and my fix suggestion to their voting system for features to be implemented.

The upside was that by comparing the kdiff3 invocations, the kdiff3 documentation and the Beyond Compare SCM integration information, I could deduce the command line necessary for Semanticmerge to use Beyond Compare as an external merge and diff tool.

The -edt, -emt and -e2mt options are the ones which specify how the external diff tool, external 3-way merge tool and external 2-way merge tool are to be called. Once I understood that, I split the problem into its obvious parts: each invocation had to be mapped from kdiff3 options to Beyond Compare options, adding the occasional bell and whistle if possible.

The parts to figure out, ordered by complexity, were:

  1. -edt="kdiff3 \"#sourcefile\" \"#destinationfile\""

  2. -e2mt="kdiff3 \"#sourcefile\" \"#destinationfile\" -o \"#output\""

  3. -emt="kdiff3 \"#basefile\" \"#sourcefile\" \"#destinationfile\" --L1 \"#basesymbolic\" --L2 \"#sourcesymbolic\" --L3 \"#destinationsymbolic\" -o \"#output\""

Semanticmerge integrates with kdiff3 in diff mode via the -edt option. This was easy to map to Beyond Compare; I just replaced kdiff3 with bcompare:
-edt="bcompare \"#sourcefile\" \"#destinationfile\""
Integration for 2-way merges was also quite easy, the mapping to Beyond Compare was:
-e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\""
For the 3-way merge I was a little confused, because the Beyond Compare documentation and options were inconsistent between Windows and Linux. On Windows, for some of the SCMs, the options that set the titles for the panes are '/title1', '/title2', '/title3' and '/title4' (way too descriptive for my taste /sarcasm), but for some others they are '/lefttitle', '/centertitle', '/righttitle', '/outputtitle', while on Linux the options are the more explicit kind, but with a '-' instead of a '/'.

The basic things were easy: ordering the parameters as 'source, destination, base, output' instead of kdiff3's 'base, source, destination, -o output'. Then I wanted to add the bells and whistles, since it really makes more sense for the developer to see something like 'Destination: [method] readOptions' instead of '/tmp/tmp4327687242.tmp', and because that's exactly what is needed for Semanticmerge when merging methods: on conflicts, the various versions of the functions are placed in temporary files whose names don't mean anything.

So, after some digging into the examples from Beyond Compare and kdiff3 documentation, I ended up with:
-emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'"

Sadly, I wasn't able to identify the symbolic name for the output, so I added the hard-coded 'merge result'. If the Codice people would like to help with this information (or if it exists), I would be more than willing to update the information and make the necessary changes.

Then I added the bells and whistles for the -edt and -e2mt options, so I ended up with an even more complicated command line. The end result was this monstrosity:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java -edt="bcompare \"#sourcefile\" \"#destinationfile\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'" -emt="bcompare \"#sourcefile\" \"#destinationfile\" \"#basefile\" \"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic' -centertitle='#basesymbolic' -outputtitle='merge result'" -e2mt="bcompare \"#sourcefile\" \"#destinationfile\" -savetarget=\"#output\" -lefttitle='#sourcesymbolic' -righttitle='#destinationsymbolic'"
So when I 3-way merge a function I get something like this (sorry for high resolution, lower resolutions don't do justice to the tools):



I don't expect this post to remain relevant for too much time, because after sending my feedback to Codice, they were open to my suggestion to have persistent settings for the external tool integration, so, in the future, the command line could probably be as simple as:
semanticmergetool -s=$sm_dir/src.java -b=$sm_dir/base.java -d=$sm_dir/dst.java -r=/tmp/semanticmergetoolresult.java
And the integration could be done via the GUI, while the command line can become a way to override the defaults.

24 August 2013

Marko Lalic: PTS rewrite: Django Memory Usage


Rewriting the PTS in a fairly large Web framework such as Django is sure to cause larger memory consumption, even for the simplest tasks, when compared to a couple of (loosely related) Perl scripts, if only because the entire Django machinery needs to be loaded.

However, just how large the memory consumption difference between the two can be is seen in the example of the command that dispatches received package emails to users. In the new version, the command is implemented as a Django management command, whereas before it was a Perl script, dispatch.pl, relying on the common.pl script for some additional functions.

The reason this command's memory usage is important is an incident with the current test deployment of the new PTS found at http://pts.debian.net. All package mails that the old PTS receives are also forwarded to the new instance in order to expose the new implementation to real-world mail traffic. At a certain point, over a hundred mails were received instantly, causing over a hundred dispatch processes to be launched. This led to the system running out of memory; the kernel had to run the OOM killer, which brought down the PostgreSQL server, thereby making the site unavailable.

For now, the problem seems to have been prevented from recurring by setting up exim to queue messages when a certain system load average is exceeded. It is still interesting to check how large the memory consumption difference between the old and new dispatch really is.
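As an illustration, exim can be told to queue rather than deliver when the machine is loaded via its queue_only_load main option; a minimal sketch (the threshold value and the exact configuration file are assumptions, not taken from this post, and depend on the local exim4 layout):
# In exim's main configuration section:
queue_only_load = 8    # queue incoming messages instead of delivering them
                       # immediately whenever the load average rises above 8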

Measurement setup
In order to measure the memory use, two messages of different sizes were used: 1 KB and 1 MB.

For the new PTS, two cases were considered: one where the database used is sqlite3 and one where it is PostgreSQL. In both cases, the database was simply initialized by syncdb, meaning it contained only the default keywords found in the initial fixture and no registered users or subscriptions. The DEBUG setting was set to False.

For testing the old PTS's dispatch.pl, the database was also initialized empty.

This way, the tests should show the actual difference between the two implementations.

Since the main concern here is the maximum memory usage of the dispatch script, a simple bash script was written which takes a PID of a running process and outputs the maximum memory usage as reported by ps once the process is terminated. The script used is shown below.

#!/usr/bin/env bash

pid=$1

while ps $pid >/dev/null
do
ps -o vsz= $pid
sleep 0.1
done | sort -n | tail -n1
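A hedged usage sketch, assuming the script above is saved as maxmem.sh and made executable (the dispatch invocation below is a placeholder, not the actual management command name):
$ python manage.py dispatch < mail.mbox &   # hypothetical: start the management command in the background
$ ./maxmem.sh $!                            # $! is the PID of the last background job; prints the peak VSZ in KiB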

Old PTS Measurements
The maximum memory usage of the old PTS was:
  1. 37.8 MB for the 1 KB message
  2. 38.2 MB for the 1 MB message

New PTS Measurements

sqlite3
The maximum memory usage of the new PTS using sqlite3 was:
  1. 91.77 MB for the 1 KB message
  2. 92.59 MB for the 1 MB message

PostgreSQL

The maximum memory usage of the new PTS using PostgreSQL was:
  1. 148 MB for the 1 KB message
  2. 160 MB for the 1 MB message

Comparison and Discussion
When comparing the memory used by the old dispatch.pl implementation and the new PTS running sqlite3, the difference does not seem to be too large and is to be expected, since, as mentioned, the whole Django framework is loaded when executing a management command.

However, the huge difference between the two versions of the new PTS (the one using sqlite3 and the one using PostgreSQL) is very surprising indeed.

Another interesting measurement is running a bare Django management command (one including only a sleep statement, in order to allow enough time to measure its memory usage) on PostgreSQL and on sqlite3. In both cases, the memory usage was only about 1 MB less than the respective maximum memory usage when processing the 1 KB message.

All this considered, the logical conclusion seems to be that either the psycopg2 package and/or the Django postgresql_psycopg2 database engine uses a lot more memory than the corresponding sqlite3 alternatives.

Could anyone shed some more light on what causes this stark difference between using sqlite3 and PostgreSQL in Django? Is there anything that could be done to mitigate it?

3 July 2013

Petter Reinholdtsen: Fourth alpha release of Debian Edu/Skolelinux based on Debian Wheezy

The fourth wheezy based alpha release of Debian Edu was wrapped up today. This is the release announcement: New features for Debian Edu 7.1+edu0~alpha3 released 2013-07-03 These are the release notes for Debian Edu / Skolelinux 7.1+edu0~alpha3, based on Debian with codename "Wheezy". About Debian Edu and Skolelinux Debian Edu, also known as Skolelinux, is a Linux distribution based on Debian providing an out-of-the-box environment of a completely configured school network. Immediately after installation a school server running all services needed for a school network is set up, just waiting for users and machines to be added via GOsa², a comfortable Web-UI. A netbooting environment is prepared using PXE, so after initial installation of the main server from CD, DVD or USB stick all other machines can be installed via the network. The school server provides an LDAP database and Kerberos authentication service, centralized home directories, a DHCP server, a web proxy and many other services. The desktop contains more than 60 educational software packages and more are available from the Debian archive, and schools can choose between the KDE, Gnome, LXDE and Xfce desktop environments. This is the fourth test release based on Debian Wheezy. Basically this is an updated and slightly improved version compared to the Squeeze release. Software updates Other changes Known issues Where to get it To download the multiarch netinstall CD release you can use The MD5SUM of this image is: 2b161a99d2a848c376d8d04e3854e30c
The SHA1SUM of this image is: 498922e9c508c0a7ee9dbe1dfe5bf830d779c3c8 To download the multiarch USB stick ISO release you can use The MD5SUM of this image is: 25e808e403a4c15dbef1d13c37d572ac
The SHA1SUM of this image is: 15ecfc93eb6b4f453b7eb0bc04b6a279262d9721 How to report bugs http://wiki.debian.org/DebianEdu/HowTo/ReportBugs
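To verify a downloaded image against the sums above, something like the following should work (image.iso is a placeholder for whichever image was downloaded):
$ md5sum image.iso     # compare the output against the MD5SUM listed above
$ sha1sum image.iso    # compare the output against the SHA1SUM listed above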

1 July 2013

Michael Stapelberg: Survey answers part 2: the transition

This blog post is the second of a series of posts dealing with the results of the Debian systemd survey. I intend to give a presentation at DebConf 2013, too, so you could either read my posts, or watch the talk, or both :-). It seems that it is unclear how Debian's transition to systemd is intended to work. By "transition", we mean going from the current state (sysvinit is the default and fully supported) to "systemd is fully supported". Then, by merely installing systemd by default and letting it provide /sbin/init, we can make it the default init system. If and when that happens is a different matter, and it's not necessary for all packages to have systemd support.
sysvinit compatibility
systemd natively supports sysvinit scripts, meaning your existing package will work as-is, but you cannot utilize all the features that systemd provides. The sysvinit support works very well, as you can try in a fresh Debian wheezy VM. In the output of systemctl list-units, every entry which has an LSB: prefix is actually a sysvinit script. The mechanism with which systemd decides whether to use an init script or a service file is by looking at whether a service file with a corresponding name exists. That is, if e.g. apache2.service exists, systemd will prefer it over /etc/init.d/apache2. To make this crystal clear: it is not necessary to ship service files for all services on some kind of flag day. systemd supports a mixed installation where some services use init scripts and some services use service files.
Adding systemd support to your package
In a nutshell, it usually works like this:
  1. Install a service file to /lib/systemd/system/foo.service.
    Often, upstream already provides and installs a .service file.
    If not, you can place your file at debian/package.service
    Make sure that your service file name corresponds to the sysvinit script name
    (e.g. apache2.service for /etc/init.d/apache2); a minimal sketch of such a file is shown after this list
  2. Ensure your service file(s) are enabled and started.
    We strongly recommend you to use our package dh-systemd.
    If you use dh(1), add --with=systemd in debian/rules and Build-Dep on dh-systemd
  3. Test your package, see the next section.
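As an illustration of step 1, here is a minimal sketch of what such a service file might contain (the foo name, description and daemon path are hypothetical, not taken from this post):
[Unit]
Description=Foo daemon (hypothetical example)
After=network.target

[Service]
ExecStart=/usr/sbin/foo-daemon

[Install]
WantedBy=multi-user.target
With dh(1), step 2 then amounts to adding --with=systemd to the dh call in debian/rules and Build-Depending on dh-systemd, as described above.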
For details see wiki.debian.org/Systemd/Packaging.
Testing systemd
We carefully made sure that you can install the systemd Debian package on your machine alongside sysvinit without breaking anything. The systemd package does not conflict with any other packages, it will not replace /sbin/init, and systemd will not be enabled right away. It is only after you specify the kernel parameter init=/bin/systemd in /etc/default/grub that you switch to systemd. In case you want to go back, simply boot without this kernel parameter.
Conclusion
In conclusion, the transition is straightforward and the necessary infrastructure is in place. systemd is available in Debian and can be used today. Packages can add systemd support whenever their maintainer(s) feel like it. There is no need for a flag day. We can switch the default whenever we think we are ready.

30 June 2013

Russell Coker: Links June 2013

Cory Doctorow published a letter from a 14yo who had just read his novel Homeland [1]. I haven't had anything insightful to say about Aaron Swartz, so I think that this link will do [2]. Seth Godin gave an interesting TED talk about leading tribes [3]. I think everyone who is active in the FOSS community should watch this talk. Ron Garrett wrote an interesting post about the risk of being hit by a dinosaur killer [4]. We really need to do something about this, and the cost of defending against asteroids is almost nothing compared to defence spending. Afra Raymond gave an interesting TED talk about corruption [5]. He focussed on his country, Trinidad and Tobago, but the lessons apply everywhere. Wikihouse is an interesting project that is based around sharing designs for houses that can be implemented using CNC milling machines [6]. It seems to be at the early stages but it has a lot of potential to change the building industry. Here is a TED blog post summarising Dan Pallotta's TED talk about fundraising for nonprofits [7]. His key point is that moral objections to advertising for charities significantly reduce their ability to raise funds and impact the charitable mission. I don't entirely agree with his talk, which is very positive towards spending on promotion, but I think that he makes some good points which people should consider. Here is a TED blog post summarising Peter Singer's TED talk about effective altruism [8]. His focus seems to be on ways of cheaply making a significant difference, which doesn't seem to agree with Dan Pallotta's ideas. Patton Oswalt wrote an insightful article about the culture of stand-up comedians which starts with joke stealing and heckling and ends with the issue of rape jokes [9]. Karen Eng wrote an interesting TED blog post about Anthony Vipin's invention of HAPTIC shoes for blind people [10]. The vibration of the shoes tells the person which way to walk and a computer sees obstacles that need to be avoided. David Blaine gave an interesting TED talk about how he prepared for a stunt of holding his breath for 17 minutes [11].

4 March 2013

Lucas Nussbaum: Debian is (still) changing

(Looking for those graphs online, I realized that I never properly published them, besides that old post.) I've been playing with snapshot.d.o, which is a fantastic resource if you want to look at Debian from a historical perspective (well, since 2005 at least).
Team maintenance
We now have more team-maintained packages than packages maintained by someone alone. Interestingly, the small, ad-hoc group of developers model does not really take off.
Maintenance using a VCS
A large majority of our packages are maintained in a VCS repository, with Git being the clear winner now. Possible goal for Jessie: standardize on a Git workflow, since every team tends to design its own?
Packaging helpers
Again, we have a clear winner here, with dh. It's interesting to note that, while dh was designed as a CDBS killer, it kind-of fails in that role. Possible goal for Jessie: deprecate at least pure-debhelper packaging?
Patch systems and packaging formats
Again, a clear winner with 3.0 (quilt). The (dirty) scripts that generate those graphs are available in Git (but you need to connect to stabile to execute them, and it's rather time consuming: hours/days).

30 January 2013

Tanguy Ortolo: Using the UDF as a successor of FAT for USB sticks

FAT
USB sticks are traditionally formatted with FAT32, because this file system is implemented by almost every operating system and device. Unfortunately, it sucks, as it cannot handle volumes larger than 2 TiB, store files larger than 4 GiB or store symbolic links, for instance. In a word, it is an obsolete and deficient file system.

exFAT
Good news: someone addressed that problem. Bad news: that someone is Microsoft. So as you could expect, exFAT, the extended FAT, is a stinking proprietary, secret and patented file system. There are free implementations of that shit, but it is safer to stay away from it.
UDF to the rescue!
Good news: there is one file system that is implemented almost everywhere as well, and which does not suffer from such limitations. UDF, the Universal Disk Format, is an ISO standard originally designed for DVDs, but it is perfectly usable for USB sticks. It also supports POSIX permissions, with one killer feature for removable media: a file can belong to no specific person or group. So, to use it, assuming your USB stick is /dev/sdc:
$ dd if=/dev/zero of=/dev/sdc bs=1M count=1
$ mkudffs -b 512 --media-type=hd /dev/sdc
The initial dd is there to erase the existing partition table or file system information, to prevent your USB stick from being detected as FAT after it has been formatted with UDF. The -b 512 is to force a file system block size equal to the USB stick's physical block size, as required by the UDF specification. Adapt it if you have the luck of having a USB stick with a more appropriate block size. After that, your USB stick will be usable for reading and writing with GNU/Linux and the other free operating systems of course, but also with current versions of Windows (read-only with the outdated version XP) and with MacOS.
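To double-check the result, the freshly formatted stick can also be mounted by hand (a sketch, assuming the same /dev/sdc device as above and an existing /mnt mount point; most desktop environments will simply auto-mount it):
$ mount -t udf /dev/sdc /mnt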

21 December 2012

Tim Retout: Perl Forking, Reference Counting and Copy-on-Write

I have been dealing with an interesting forking issue at work. It happens to involve Perl, but don't let that put you off. So, suppose you need to perform an I/O-bound task that is eminently parallelizable (in our case, generating and sending lots of emails). You have learnt from previous such attempts, and broken out Parallel::Iterator from CPAN to give you easy fork()ing goodness. Forking can be very memory-efficient, at least under the Linux kernel, because pages are shared between the parent and the children via a copy-on-write system. Further suppose that you want to generate and share a large data structure between the children, so that you can iterate over it. Copy-on-write pages should be cheap, right?
my $large_array_ref = get_data();
my $iter = iterate( sub {
    my $i = $_[1];
    my $element = $large_array_ref->[$i];
    ...
}, [0..1000000] );
Sadly, when you run your program, it gobbles up memory until the OOM killer steps in. Our first problem was that the system malloc implementation was less good for this particular task than Perl's built-in malloc. Not a problem, we were using perlbrew anyway, so a quick few experimental rebuilds later and this was solved. More interesting was the slow, 60MB/s leak that we saw after that. There were no circular references, and everything was going out of scope at the end of the function, so what was happening? Recall that Perl uses reference counting to track memory allocation. In the children, because we took a reference to an element of the large shared data structure, we were effectively writing to the relevant page in memory, so it would get copied. Over time, as we iterated through the entire structure, the children would end up copying almost every page! This would double our memory costs. (We confirmed the diagnosis using 'smem', incidentally. Very useful.) The copy-on-write semantics of fork() do not play well with reference-counted interpreted languages such as Perl or CPython. Apparently a similar issue occurs with some mark-and-sweep garbage-collection implementations - but Ruby 2.0 is reputed to be COW-friendly. All was not lost, however - we just needed to avoid taking any references! Implement a deep copy that does not involve saving any intermediate variables along the way. This can be a bit long-winded, but it works.
my $large_array_ref = get_data();
my $iter = iterate( sub {
    my $i = $_[1];
    my %clone;
    $clone{id}  = $large_array_ref->[$i]{id};
    $clone{foo} = $large_array_ref->[$i]{foo};
    ...
}, [0..1000000] );
This could be improved if we wrote an XS CPAN module that cloned data structures without incrementing any reference counts - I presume this is possible. We tried the most common deep-copy modules from CPAN, but have not yet found one that avoids reference counting. This same problem almost certainly shows up when using the Apache prefork MPM and mod_perl - even read-only global variables can become unshared. I would be very interested to learn of any other approaches people have found to solve this sort of problem - do email me.
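As an aside, the smem diagnosis mentioned above can be reproduced with something like this (a sketch; the process filter is an assumption — watch the private USS/PSS of the forked children creep up towards their RSS as the shared pages get copied):
$ smem -P perl    # columns include Swap, USS, PSS and RSS per matching process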

5 December 2012

Russell Coker: Cheap Android Tablet from Aldi

I've just bought a 7" Onix tablet from Aldi. It runs Android 4.0.4, has a 1GHz Cortex A8 CPU, 512M of RAM, 16G of flash storage, and a 800*480 display. They are selling rapidly and I don't know how long they will last; probably you could get a returned one next week if you can't get one today. But if you like pink then you may be able to get one (the black ones are selling out first). The tablet seems like a nice piece of hardware, solid construction and it feels nice to hold. Mine has a minor screen defect, but that's the sort of thing you expect from a cheap device; apart from that the display is good. The Wifi doesn't seem to have as good a range as some other devices (such as my phones and the more expensive 10" tablet I got from Kogan). This isn't a problem for me (the data intensive uses for this device will be in the same room as the AP) but could be a killer for some people. If you have your phone or a dedicated 3G Wifi AP in your pocket while using the tablet then it should be fine, but if you have an AP at the wrong end of your house then you could be in trouble. I found Youtube unusable due to slow downloading even when sitting next to my AP, but I can play videos downloaded from iView that are on my local web server (which is more important to me). I expect that I will be able to play local copies of TED talks too. The camera is bad by phone camera standards; fortunately I have no interest in using a tablet as a camera. I had no real problems with the Google Play store (something that caused problems for some users of an earlier Aldi Android tablet). Generally the tablet works well. The people who build Android for modern devices seem remarkably stupid when it comes to partitioning: every device I've seen has only a small fraction of the storage usable for apps. This tablet is the worst I've ever seen, it has 16G of storage of which there is 512M partitioned for apps, of which only 400M is free when you first get the device! It comes pre-installed with outdated versions of the Facebook client and Google Maps (which isn't very useful on a Wifi device) and some other useless things. If you upgrade them to the latest versions then you'll probably lose another 100M of the 512M! Fortunately the Android feature to run apps from the VFAT partition works, so I haven't been prevented from doing anything by this problem yet. In conclusion, it's not the greatest Android tablet. But you don't expect a great tablet for $100. What I hoped for was a somewhat low spec tablet that works reasonably well, and that's what I got. I'm happy.

20 November 2012

Jon Dowland: Waterstones

In May this year, in a desperate bid to bail water out of a sinking ship, HMV group sold off the Waterstones chain of bookstores. I'm fond of reading so naturally I visit Waterstones from time to time (but not exclusively). For many years, their promotions were invariably structured as "3 for 2" offers: find three books in the promotion, in-store, and get the cheapest of the three for free. I hated these promotions. I always found it very challenging to find three books in the promotion that I wanted. If I was visiting to buy a specific book and found it in the promotion, on more than one occasion I left the shop empty handed: I wasn't prepared to pay full price for a book in a promotion and not get the benefit of the promotion. Since being sold, Waterstones seem to have made a lot of changes. Many new hardbacks from popular authors have debuted at a discount for a short period (Rowling, Pratchett). I've found many more discounted books not tied to others in 3-for-2-style offers, sometimes window-display stock at half price, a month or so after release. They started a loyalty-card scheme, with a £10 gift card the prize you reach once you've spent £100. (I wonder if such schemes are examples of a token economy?) Finally, they've tentatively started to explore putting cafes in their branches. The one in Newcastle is best described as 'fledgling', but it's nice to have one more place to escape for a cup of tea at lunchtimes. The cafe, as well as a very well stocked collection of magazines, were the two killer features of the sadly-defunct Borders bookseller chain. For years before we got our own Borders in the North East, I used to enjoy the York branch if I was passing. I actually managed to fill a loyalty card this year, so clearly their change in attitude has worked on me! I'm generally not a fan of gift cards, especially in today's climate, so I quickly spent it. I hold an American Express credit card, which earns me cashback on purchases which I make with it. I think the rate is about 1%. However, I can opt to receive the cashback in the form of gift vouchers. If I do so, I get them to the value of 110% of my cashback balance. One of the retailers in on the scheme is Waterstones. I'm tempted.

20 October 2012

Vincent Bernat: Network lab with KVM

To experiment with network stuff, I was using UML-based network labs. Many alternatives exist, like GNS3, Netkit, Marionnet or Cloonix. All of them are great viable solutions but I still prefer to stick to my minimal home-made solution with UML virtual machines. Here is why: The use of UML had some drawbacks: However, UML features HostFS, a filesystem providing access to any part of the host filesystem. This is the killer feature which allows me to not use any virtual disk image and to get access to my home directory right from the guest. I discovered recently that KVM provided 9P, a similar filesystem on top of VirtIO, the paravirtualized IO framework.

Setting up the lab The setup of the lab is done with a single self-contained shell file. The layout is similar to what I have done with UML. I will only highlight here the most interesting steps.

Booting KVM with a minimal kernel My initial goal was to experiment with Nicolas Dichtel's IPv6 ECMP patch. Therefore, I needed to configure a custom kernel. I have started from make defconfig, removed everything that was not necessary, added what I needed for my lab (mostly network stuff) and added the appropriate options for VirtIO drivers:
CONFIG_NET_9P_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
No modules. Grab the complete configuration if you want to have a look. From here, you can start your kernel with the following command ($LINUX is the appropriate bzImage):
kvm \
  -m 256m \
  -display none \
  -nodefconfig -no-user-config -nodefaults \
  \
  -chardev stdio,id=charserial0,signal=off \
  -device isa-serial,chardev=charserial0,id=serial0 \
  \
  -chardev socket,id=con0,path=$TMP/vm-$name-console.pipe,server,nowait \
  -mon chardev=con0,mode=readline,default \
  \
  -kernel $LINUX \
  -append "init=/bin/sh console=ttyS0"
Of course, since there is no disk to boot from, the kernel will panic when trying to mount the root filesystem. KVM is configured not to display video output (-display none). A serial port is defined and uses stdio as a backend [1]. The kernel is configured to use this serial port as a console (console=ttyS0). A VirtIO console could have been used instead, but it seems it is not possible to make it work early in the boot process. The KVM monitor is set up to listen on a Unix socket. It is possible to connect to it with socat UNIX:$TMP/vm-$name-console.pipe -.

Initial ramdisk UPDATED: I was initially unable to mount the host filesystem as the root filesystem for the guest directly by the kernel. In a comment, Josh Triplett told me to use /dev/root as the mount tag to solve this problem. I keep using an initrd in this post but the lab on Github has been updated to not use one. Here is how to build a small initial ramdisk:
# Setup initrd
setup_initrd() {
    info "Build initrd"
    DESTDIR=$TMP/initrd
    mkdir -p $DESTDIR
    # Setup busybox
    copy_exec $($WHICH busybox) /bin/busybox
    for applet in $(${DESTDIR}/bin/busybox --list); do
        ln -s busybox ${DESTDIR}/bin/${applet}
    done
    # Setup init
    cp $PROGNAME ${DESTDIR}/init
    cd "${DESTDIR}" && find . | \
       cpio --quiet -R 0:0 -o -H newc | \
       gzip > $TMP/initrd.gz
}
The copy_exec function is stolen from the initramfs-tools package in Debian. It will ensure that the appropriate libraries are also copied. Another solution would have been to use a static busybox. The setup script is copied as /init in the initial ramdisk. It will detect it has been invoked as such. If it was omitted, a shell would be spawned instead. Remove the cp call if you want to experiment manually. The flag -initrd allows KVM to use this initial ramdisk.

Root filesystem Let's mount our root filesystem using 9P. This is quite easy. First, KVM needs to be configured to export the host filesystem to the guest:
kvm \
  ${PREVIOUS_ARGS} \
  -fsdev local,security_model=passthrough,id=fsdev-root,path=${ROOT},readonly \
  -device virtio-9p-pci,id=fs-root,fsdev=fsdev-root,mount_tag=rootshare
${ROOT} can either be / or any directory containing a complete filesystem. Mounting it from the guest is quite easy:
mkdir -p /target/ro
mount -t 9p rootshare /target/ro -o trans=virtio,version=9p2000.u
You should find a complete root filesystem inside /target/ro. I have used version=9p2000.u instead of version=9p2000.L because the latter does not allow a program to mount() a host mount point [2]. Now, you have a read-only root filesystem (because you don't want to mess with your existing root filesystem and moreover, you did not run this lab as root, did you?). Let's use a union filesystem. Debian comes with AUFS while Ubuntu and OpenWRT have migrated to overlayfs. I was previously using AUFS but got errors in some specific cases. It is still not clear which one will end up in the kernel. So, let's try overlayfs. I didn't find any patchset ready to be applied on top of my kernel tree. I was working with David Miller's net-next tree. Here is how I have applied the overlayfs patch on top of it:
$ git remote add torvalds git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
$ git fetch torvalds
$ git remote add overlayfs git://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git
$ git fetch overlayfs
$ git merge-base overlayfs.v15 v3.6
4cbe5a555fa58a79b6ecbb6c531b8bab0650778d
$ git checkout -b net-next+overlayfs
$ git cherry-pick 4cbe5a555fa58a79b6ecbb6c531b8bab0650778d..overlayfs.v15
Don't forget to enable CONFIG_OVERLAYFS_FS in .config. Here is how I configured the whole root filesystem:
info "Setup overlayfs"
mkdir /target
mkdir /target/ro
mkdir /target/rw
mkdir /target/overlay
# Version 9p2000.u allows to access /dev, /sys and mount new
# partitions over them. This is not the case for 9p2000.L.
mount -t 9p        rootshare /target/ro      -o trans=virtio,version=9p2000.u
mount -t tmpfs     tmpfs     /target/rw      -o rw
mount -t overlayfs overlayfs /target/overlay -o lowerdir=/target/ro,upperdir=/target/rw
mount -n -t proc  proc /target/overlay/proc
mount -n -t sysfs sys  /target/overlay/sys
info "Mount home directory on /root"
mount -t 9p homeshare /target/overlay/root -o trans=virtio,version=9p2000.L,access=0,rw
info "Mount lab directory on /lab"
mkdir /target/overlay/lab
mount -t 9p labshare /target/overlay/lab -o trans=virtio,version=9p2000.L,access=0,rw
info "Chroot"
export STATE=1
cp "$PROGNAME" /target/overlay
exec chroot /target/overlay "$PROGNAME"
You have to export your ${HOME} and the lab directory from the host:
kvm \
  ${PREVIOUS_ARGS} \
  -fsdev local,security_model=passthrough,id=fsdev-root,path=${ROOT},readonly \
  -device virtio-9p-pci,id=fs-root,fsdev=fsdev-root,mount_tag=rootshare \
  -fsdev local,security_model=none,id=fsdev-home,path=${HOME} \
  -device virtio-9p-pci,id=fs-home,fsdev=fsdev-home,mount_tag=homeshare \
  -fsdev local,security_model=none,id=fsdev-lab,path=$(dirname "$PROGNAME") \
  -device virtio-9p-pci,id=fs-lab,fsdev=fsdev-lab,mount_tag=labshare

Network You know what is missing from our network lab? Network setup. For each LAN that I will need, I spawn a VDE switch:
# Setup a VDE switch
setup_switch() {
    info "Setup switch $1"
    screen -t "sw-$1" \
        start-stop-daemon --make-pidfile --pidfile "$TMP/switch-$1.pid" \
        --start --startas $($WHICH vde_switch) -- \
        --sock "$TMP/switch-$1.sock"
    screen -X select 0
}
To attach an interface to the newly created LAN, I use:
mac=$(echo $name-$net | sha1sum | \
            awk '{print "52:54:" substr($1,0,2) ":" substr($1, 2, 2) ":" substr($1, 4, 2) ":" substr($1, 6, 2)}')
kvm \
  ${PREVIOUS_ARGS} \
  -net nic,model=virtio,macaddr=$mac,vlan=$net \
  -net vde,sock=$TMP/switch-$net.sock,vlan=$net
The use of a VDE switch allows me to run the lab as a non-root user. It is possible to give Internet access to each VM, either by using the -net user flag or by using slirpvde on a special switch. I prefer the latter solution since it allows the VMs to speak to each other, as sketched below.
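A rough sketch of the slirpvde approach, assuming a dedicated switch created with the setup_switch function above (the 'internet' switch name and the DHCP option are assumptions, not taken from this post):
setup_switch internet                                          # create one more VDE switch as above
slirpvde --daemon --dhcp --sock "$TMP/switch-internet.sock"    # NAT that switch to the outside world
Any VM with an interface attached to this switch can then obtain an address over DHCP and reach the Internet.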

Debugging This lab was mostly done to debug both the kernel and Quagga. Each of them can be debugged remotely.

Kernel debugging While the kernel features KGDB, its own debugger, compatible with GDB, it is easier to use the remote GDB server built inside KVM.
kvm \
  ${PREVIOUS_ARGS} \
  -gdb unix:$TMP/vm-$name-gdb.pipe,server,nowait
To connect to the remote GDB server from the host, first locate the vmlinux file at the root of the source tree and run GDB on it. The kernel has to be compiled with CONFIG_DEBUG_INFO=y to get the appropriate debugging symbols. Then, use socat with the Unix socket to attach to the remote debugger:
$ gdb vmlinux
GNU gdb (GDB) 7.4.1-debian
Reading symbols from /home/bernat/src/linux/vmlinux...done.
(gdb) target remote | socat UNIX:$TMP/vm-$name-gdb.pipe -
Remote debugging using | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-gdb.pipe -
native_safe_halt () at /home/bernat/src/linux/arch/x86/include/asm/irqflags.h:50
50   
(gdb)
You can now set breakpoints and resume the execution of the kernel. It is easier to debug the kernel if optimizations are not enabled. However, it is not possible to disable them globally. You can however disable them for some files. For example, to debug net/ipv6/route.c, just add CFLAGS_route.o = -O0 to net/ipv6/Makefile, remove net/ipv6/route.o and type make.
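A sketch of those steps as shell commands, run from the top of the kernel source tree (appending the flag at the end of the Makefile is an assumption; kbuild picks the variable up wherever it is defined):
$ echo 'CFLAGS_route.o = -O0' >> net/ipv6/Makefile   # build this one object without optimizations
$ rm net/ipv6/route.o
$ make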

Userland debugging To debug a program inside KVM, you can just use gdb as usual. Your $HOME directory is available and it should therefore be straightforward. However, if you want to perform some remote debugging, that's quite easy. Add a new serial port to KVM:
kvm \
  ${PREVIOUS_ARGS} \
  -chardev socket,id=charserial1,path=$TMP/vm-$name-serial.pipe,server,nowait \
  -device isa-serial,chardev=charserial1,id=serial1
Start gdbserver in the guest:
$ libtool execute gdbserver /dev/ttyS1 zebra/zebra
Process /root/code/orange/quagga/build/zebra/.libs/lt-zebra created; pid = 800
Remote debugging using /dev/ttyS1
And from the host, you can attach to the remote process:
$ libtool execute gdb zebra/zebra
GNU gdb (GDB) 7.4.1-debian
Reading symbols from /home/bernat/code/orange/quagga/build/zebra/.libs/lt-zebra...done.
(gdb) target remote | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-serial.pipe -
Remote debugging using | socat UNIX:/tmp/tmp.W36qWnrCEj/vm-r1-serial.pipe -
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib64/ld-linux-x86-64.so.2
0x00007ffff7dddaf0 in ?? () from /lib64/ld-linux-x86-64.so.2
(gdb)

Demo For a demo, have a look at the following video (it is also available as an Ogg Theora video).
<iframe frameborder="0" height="270" src="http://www.dailymotion.com/embed/video/xuglsg" width="480"></iframe>

  1. stdio is configured such that signals are not enabled. KVM won't stop when receiving SIGINT. This is important for the usage we want to have.
  2. Therefore, it is not possible to mount a fresh /proc on top of the existing one. I have searched a bit but didn't find why. Any comments on this are welcome.

22 August 2012

Gunnar Wolf: An industry commits suicide and blames us

[ Once again, I am translating somebody else's material. In this case, my good Costa Rican friend Carolina Flores. Please excuse my stylistic mistakes; my English is far from native, as you well know. But this material is worth sharing, and worth investing some tens of minutes doing a quick translation. If you can read Spanish, go read Caro's original entry ] Have you been to a music record store lately? I did so last Saturday, as a mere exercise. I was not planning on buying anything but I wanted to monitor things and confirm my suspicions. What was I suspicious of? First, that I would only find old records. And so it was: the only recent record I found was ...Little Broken Hearts by Norah Jones. The second, that I would only find music for people over 50. So it was: were I there to look for a present for my father, I would have walked out with 10 good records. Third, that nothing worth commenting on would happen in the store. About that last point, I should point out it was around 10 AM and the store had just opened its doors. Let's concede the benefit of the doubt. I don't think many of you will remember, but in Barrio La California (where there is now a beauty parlour, almost in front of AM.PM) there was Auco Disco. In Auco Disco there was a guy specialized in rock (Mauricio Alice) and another one specialized in jazz (I don't remember his name). In that record store you could always find rare records, but if they were not there, at least you were sure to find somebody to say: "No, we don't have that, but that's an excellent record, it's the best that [insert group here] have ever recorded because just afterwards they switched their guitar player, they had gone a bit south but with that record they are flying. But no, we don't have it; I can recommend you this record by [insert another group] because it has a guitar solo in track six that is amazing". It would happen more or less like that, which means one would arrive at Auco Disco at 10 AM and leave around 5 PM with three new records, after having listened to a spectacular music selection. What happened to those stores? Were they killed by The Pirate Bay? That's the simplistic answer from the recording industry! The answer is that those stores never got anything from the industry but an invoice. The industry, especially in dispensable markets such as ours, was limited to hiring artists, making sure they recorded a sellable product, producing the object called a record, and that's it. The more commercial radio stations were paid to program those songs, as it cannot be casual that "Mosa, mosa" is the summer hit in all of Latin America, can it? But record stores? Nothing. Let's carry on with that idea: radio stations are paid to program said music. This idea should not lead us to believe that recording companies are to blame for bad taste. I won't reveal my sources, but I know the success of the "Locura automática" song by La Secta was a real example. Nobody paid for it. That song got to number one because of its own merits(?) (you don't know the effort it took to find that thing, and I cannot recommend it to you). The same thing happens with other stations that don't program reggaetón, that try to save the species, and where they play what we do like. But the thing is, everything we like is not available in any record store in this country. So, even if we wanted to buy a record or give it as a gift to somebody, it is plain impossible.
And don't tell me it's the same to give as a gift a link or a CD full of downloaded MP3s as it is to give a record with cover and booklet, wrapped in gift paper. I might be old-school, but the fetish object that is the record still exists, not only because of its cover, but because of its sound. A 3MB MP3 is akin to drinking coffee dripping from a bag that has been used eight times with the same coffee beans. That format is the worst thing that has ever happened to music, and if we had any bit of dignity we would never purchase digital files from Amazon or iTunes save for MP3s with an acceptable compression level. That is, if we could buy them at all, because not even that is allowed to us. As the music industry has no interest in solving ITS problem (it is not our problem, it is those companies'), it has not even been resolved how to charge for an MP3 download that includes import fees (well, if downloading an MP3 from here off a USA-based file server can be considered importing goods into Costa Rica!!!), so we don't have to get dizzy entering the nineties-era Titi Online to discover there is nothing by Muse, Andrew Bird, The Killers, Death Cab for Cutie, Paramore, Björk... (believe me, I looked them all up, even Norah Jones and La Secta. They were not there either).

This all leads me to the question, which I present with all due respect (NOT): What the fuck do they expect us to do??? It is outrageous; above all because in the best case they will sell us a watery-coffee download that won't let us get all the details a vinyl, or less compression, would give us. In the worst case, post-MP3 groups will end up recording music with no harmonics or hidden sounds, because, what for? Nobody will hear it. They even admit it: "Some musicians and audio engineers say the MP3 format is changing the way studios mix their recordings. They say the MP3 format 'flattens' dynamics differences in tone and volume in a song. As a result, a great deal of new music sounds very much alike, and there is no longer a focus on creating a dynamic listening experience. Why work so hard at creating complex sound if nobody can detect it?" (Rolling Stone, "The Death of High Fidelity", December 26, 2007, taken from here).

That's why I am not surprised by Adrián's post regarding the sales of old records. The price has nothing to do with it. The causes are related to the fetish object that is the record and what it means or does not mean to people who have never purchased one. Adrián also asks if anybody here keeps buying records. I answered that I would if the stores sold anything I like. I do it even after the nausea I feel while reading "This phonogram is an intellectual work protected in favor of its producer. COPYING IT IN WHOLE OR IN PART IS FORBIDDEN" (like that, uppercased, yelling at whoever is only guilty of having bought a record, and defending the producer, not the artist). But I am sure that almost nobody buys records because doing so is no longer a gratifying experience; because if buying a record means clicking and then waiting 15 days for it to reach the mailbox, we prefer clicking on the download link.

But there is another reason people no longer buy records. In one of my talks on the dictatorship of "all rights reserved", I asked the 30 twenty-something-year-old students if any of them had ever bought a record. One answered he had, because he is an author and performer (cantautor in Spanish) and understands the effort that producing a record entails. The rest of them never had.
Is it possible that these young people have never listened to real music? Is it possible that, were it not for concerts, what they consider music is a pile of washed-out MP3s about to fill up 1TB of their computers? Do people no longer buy records because they cannot tell one sound from another? It is not very clear to me where I want to go with this.

The recording industry is despicable. An industry that, instead of innovating, spends its energy suing adolescents for downloading songs, trying to pass laws restricting our freedoms on the Internet, putting up DRM that makes us hostages of our devices*, and forcing us to listen to just the aroma of music, deserves my full contempt. If we add to this that said industry won't even allow us to legally download its breadcrumbs, because it has not understood that the Internet does not need a van crossing borders, then besides my contempt it deserves my pity and my heartfelt condolences. But the condolences are also for music, real music, the kind that is not crushed under the shoe of a terrible format. They are also for independent musicians who have not realized that by begging that industry for a bit of space they just attach the "despicable" tag to themselves, given that they deserve the fruits of their work to end up in their own bank accounts.

However, there are good things that have come out of this absurdity. Good for those that have joined projects such as Autómata (even if it is in MP3), and for dreams come true such as Musopen (which has managed to make music that is in theory in the Public Domain become so in practice as well). Good for the Electronic Frontier Foundation and its list of lawyers willing to defend people accused of illegally downloading music in the USA. Good for the Creative Commons licenses that allow free sharing. All of those are growing solutions, although none of them allows me to buy the record of the Panamanian Carlos Méndez. Thankfully, a friend of mine, who knows I will never give a dime to Apple, bought the files for me on iTunes. I thank him deeply, although I would have preferred to go to Auco Disco and have Mauricio tell me that the 2007 EP I have from Carlos is better than the record he did in 2009.

* My devices don't have DRM because I use free software. I also use the Ogg file format.

Image by verbeeldingskr8

21 July 2012

Andrew Pollock: [life] Sarah sans gallbladder

I can't remember exactly when it became apparent that Sarah had gallstones; it was either during the pre-Zoe heart-related diagnostic imaging or during her pregnancy with Zoe. Apparently it's not uncommon for women to develop them after pregnancy, and it turns out that Sarah is slightly more genetically predisposed to getting them to boot. So it wasn't terribly surprising when she started having some pain recently. She had another ultrasound to confirm it, and went off to see a surgeon. Apparently they care more about the symptoms than the number of stones in the gallbladder, and they don't bother removing the stones and keeping the gallbladder, so she was booked in for a cholecystectomy last Thursday. It was a pretty straightforward procedure: she was out of the operating room in just under an hour, and awake a bit over an hour after that. I've had a few friends need to have a cholecystectomy, and the photos on Wikipedia have always fascinated me, particularly this one. It seems so freaky to have a gallbladder full of that much gravel. It can't be comfortable. We're going to be able to pick up Sarah's stones on Monday, so it'll be interesting to see what they look like.

Sarah's recovering now. It's one of the few surgeries she's had under general anaesthesia where she hasn't been sick afterwards, so that was a good start. The procedure was done laparoscopically, using four incisions, and to give themselves room to work, they inflate the body with carbon dioxide gas, which then has to work its way out of her body over the following days. So she's dealing with some bloating, bruising and swelling at the moment, but not doing too badly. Apparently if you're vertical for too long, the gas can work its way up to your diaphragm, which causes some shoulder pain, so the trick has been to walk around a bit, then lie down for a bit, and rinse and repeat.

She's not allowed to lift Zoe for 3-4 weeks, which presents a few challenges for us, but nothing insurmountable (hopefully). We converted Zoe's crib to a toddler bed a week before the surgery, and Zoe thought that was just the best thing ever and has taken to it really well. I'm around at home for breakfast and dinner, so I can take care of the lifting into and out of the high chair for those meals, and Zoe can eat lunch at her little table. Zoe's also pretty good about getting into and out of the car by herself (not that Sarah can drive until she's off the painkillers), and she's normally in daycare two days a week. She's in daycare again on Monday, so Sarah won't have to deal with Zoe on her own until Tuesday, and we have backup daycare arrangements we can fall back on if need be. Zoe's been very good about Sarah not being able to pick her up, and been very gentle. It helped with the explanation that Sarah could show her all of her "owwies". The surgeon was saying that only 1 in 400 people need to alter their diet after having their gallbladder removed, so we're hopeful that Sarah will do just fine without it. She'll certainly be more comfortable in the long run.

24 June 2012

Joey Hess: trying obnam

Obnam 1.0 was released during a stretch of several months when I had no well-connected server to use for backups. Yesterday I installed a terabyte disk in a basement with a fiber optic network connection, so my backupless time is over.

Now, granted, I have a very multi-layered approach to backups; all my data is stored in git, most of it with dozens of copies automatically maintained, and with archival data managed by git-annex. But I still like to have a "real" backup system underneath, to catch anything else. And to back up those parts of my users' data that I have not given them tools to put into git yet...

My backup server is not in my basement, so I need to securely encrypt the backups stored there. Encrypting your offsite backups is such a good idea that I've always been surprised at the paucity of tools to do it. I got by with duplicity for years, but it's increasingly creaky, and the few times I've needed to restore, it's been a terrific pain. So I'm excited to be trying Obnam today.

So far I quite like it. The only real problem is that it can be slow when there's a transatlantic link between the client and the server. Each file backed up requires several TCP round-trips, and the latency kills the bandwidth. Large files are still sent fast, and obnam uses few resources on either the client or the server while running. And this mostly only affects the initial, full backup. But the encryption and ease of use more than make up for this.

The real killer feature of Obnam's encryption isn't that it's industry-standard encryption with gpg that can be trivially enabled with a single option (--encrypt-with=DEADBEEF). No, the great thing about it is its key management. I generate a new gpg key for each system I back up. This prevents systems from reading each other's backups. But that means you have to back up the backup keys... or when a system is lost, its backup would be inaccessible. With Obnam, I can instead just grant my personal gpg key access to the repository: obnam add-key --keyid 2512E3C7. Now both the machine's key and my gpg key can access the data. Great system; you can't revoke access, but otherwise perfect. I liked this so much I stole the design and used it in git-annex too. :)

I'm also pleased I can lock down .ssh/authorized_keys on my backup server, to prevent clients running arbitrary commands. Duplicity runs ad-hoc commands over ssh, which kept me from ever locking it down. Obnam can be easily locked down, like this: command="/usr/lib/openssh/sftp-server"

This could still be improved, since clients can still read the whole filesystem with sftp. I'd like to have something like git-annex's git-annex-shell, which can limit access to only a specific repository. Hmm, if Obnam had its own server-side program like this, it could stream backup data to it using a protocol that avoids the round-trips needed by SFTP, and fix the latency issue too. Lars, I know you've been looking for a Haskell starter project ... perhaps this is it? :)
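To make the setup above concrete, here is a minimal sketch of the per-machine key scheme and the authorized_keys lockdown. The two key IDs are the placeholders used in the post; the repository URL, the backed-up path, and the extra no-* restriction options in the authorized_keys line are illustrative assumptions, not details from the original.

    # on each client: back up encrypted to the machine's own gpg key,
    # then additionally grant a personal key access to the repository
    obnam backup --repository sftp://backup.example.net/~/backups \
          --encrypt-with=DEADBEEF /home
    obnam add-key --keyid 2512E3C7

    # on the backup server, in ~/.ssh/authorized_keys: pin the client's key
    # to the sftp server so it cannot run arbitrary commands over ssh
    command="/usr/lib/openssh/sftp-server",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA... client-backup-key

As the post notes, this still lets the client read anything the sftp server can reach; chrooting the account (for example with ChrootDirectory plus ForceCommand internal-sftp in sshd_config) could tighten it further, but that goes beyond what is described here.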

15 June 2012

Erich Schubert: Dritte Startbahn - 2 gewinnt!

Usually I don't post much about politics, and this is a highly controversial issue, so please do not feel offended.
This weekend, there is an odd election in Munich. Outside of Munich, near the cities of Freising and Erding, there is the Munich airport. The company operating the airport is partially owned by the city of Munich, which gives the city a veto option.
The Munich airport has grown a lot. Everybody who has been flying a bit knows that big airports (such as Heathrow) are often the worst. If anything goes wrong, you are stuck, because it will take them a long time to resume operations. This just happened to me in Munich, where the luggage system was down and no luggage arrived at the airplanes.
Yet, they want to take the airport further down this road and make it even bigger: add two satellite terminals and a third runway. I'm convinced that this will make the airport much worse for anybody around here. The security checkpoints will be even more crowded, the lines for the baggage drop-off too, and you will have to walk much further through the airport.
Up to now, the Munich airport has been pretty good compared to others, in particular given that it is one of the largest in Europe! That is because it was designed from the ground up for this size. Now they plan to screw it up.
But there are other arguments against this beyond the egoistic view of a traveller. The first is the money issue. The airport is continuously making losses. It's the taxpayer who has to pay for all of this, and the current cost estimate is 1200 million. This is not appropriate, in particular since history shows that you can multiply such estimates by 2 to 10 to get the real number. They should first get the airport into a financially stable condition, then plan on making it even bigger.
Then there are the numbers. Just like with any political large-scale project, the numbers are all fake. The current airport was planned to cost 800 million; in the end it was about 8550 million. The politicians happily lie to us, because they want to push their pet projects. We must no longer accept such projects based on fake numbers and old predictions.
If you are already one of the 10 largest airports in Europe, can you really expect to grow even further?!? There is a natural limit to growth, unless you want every single passenger in the world to first travel to Munich multiple times before going on to their final destination ...
One thing they seem to have completely neglected is that Berlin is currently getting a brand new airport. And very likely, this is going to divert quite some traffic away from Munich, just like the Munich airport diverted a lot of traffic away from Frankfurt. To some extent that is because many people actually want to go to Berlin, not Munich, but currently have to change planes here or in Frankfurt. So when Berlin is finally operational, this will have an impact on Munich.
And speaking of the Berlin airport, it is a good example of why we should not trust the numbers and our politicians. It is another way-over-budget, way-behind-schedule project that the politicians screwed up badly and where they lied to us. If we should not have trusted them with Berlin, why should we trust them with the Munich add-on?
A lot of people whose families have been living there for years will have to be resettled. Whole towns are going to disappear. An airport is huge. Yet they cannot vote against it, because their small towns do not own shares of the airport. The politicians don't even talk to them, not even to their political representatives.
Last but not least, the airport is in a sensitive ecological area. The directly affected area is a European special protection area for wild birds. There are nature preserves nearby, and all of this area already suffers badly from airport drainage, pollution and noise. When they built the airport originally, the replacement areas they set up were badly done, and are mostly inhabited by nettles and goldenrod (which is not even native to Europe). See this article in Süddeutsche Zeitung on the impact on nature. You can't replace the loss of the original habitats just by digging some pools and filling them with water ...
If you want more information, go to this page, by Bund Naturschutz.
This is not about progress ("Fortschritt"). That is a killer argument the politicians love, but it doesn't hold. Nobody is trying to shut down the airport. Munich will be better off by keeping the balance: having both a reasonably sized airport (and in fact, the airport is already one of the 10 largest in Europe!) and preserving some nature to make it worth living here.
If you are located in Munich, please go vote against the airport extension, and enjoy the DEN-GER soccer game afterwards. Thank you.
